
Windscribe review: Despite the annoyances, it has the right idea

Engadget

The first step is always to figure out how easy or hard the VPN is to use. Windscribe and other VPNs are important tools, but you'll never use them if the UI gets in the way. I tested Windscribe's desktop apps on Windows and Mac, its mobile apps on iOS and Android and its Chrome and Firefox browser extensions. To start with, let me say that installing Windscribe is a breeze no matter where you do it. The downloaders and installers handle their own business, only requiring you to grant a few permissions. The apps arrive on your system ready to use out of the box.


Data Flows and Colonial Regimes in Africa: A Critical Analysis of the Colonial Futurities Embedded in AI Ecosystems

Ndaka, Angella, Ávila-Acosta, Fátima, Mbula-Ndaka, Harnred, Amera, Christine, Chauke, Sandra Tiyani, Majiwa, Eucabeth

arXiv.org Artificial Intelligence

Data Flows and Colonial Regimes in Africa: A Critical Analysis of the Colonial Futurities Embedded in AI Recommendation Algorithms

Angella Ndaka, University of the Witwatersrand, Johannesburg, South Africa; Fátima Ávila-Acosta, Berlin Graduate School of Social Sciences at Humboldt University, Berlin, Germany; Harnred Mbula, Centre for Epistemic Justice, Nairobi, Kenya; Christine Amera, Centre for Epistemic Justice, Nairobi, Kenya; Sandra Tiyani Chauke, University of Pretoria, South Africa; Eucabeth Majiwa, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya

Abstract: In the last few years, Africa has experienced growth in a thriving ecosystem of Artificial Intelligence (AI) technologies and systems, developed and promoted by both local and global technology players. While the sociotechnical imaginaries about these systems promote AI as critical to achieving Africa's sustainable development agenda, some of them have subtly permeated society, recreating new values, cultures, practices, and histories that threaten to marginalize minority groups in the region. Africa predominantly frames AI as an imaginary solution to address complex social challenges; however, this narrative subtly ignores deeper power-related concerns, including data governance, embedded algorithmic colonialism, and the exploitation that propagates new digital colonial sites. Meanwhile, the development of AI ethics in Africa is in its infancy and predominantly framed through Western perspectives, with the social and ethical impacts of AI innovations and applications on African epistemologies and worldviews not prioritized. To ensure that people on the African continent leverage the benefits of AI, these social and ethical impacts need to be critically and explicitly considered and addressed.
This chapter therefore seeks to frame the elemental and invisible problems of AI and big data in the African context by examining digital sites and infrastructure through the lens of power and interests. It presents reflections on how these sites use AI recommendation algorithms to recreate new digital societies in the region, how they have the potential to propagate algorithmic colonialism and negative gender norms, and what this means for the regional sustainable development agenda. The chapter proposes adopting business models that embrace response-ability and consider the existence of alternative socio-material worlds of AI. These reflections mainly come from ongoing discussions with Kenyan social media users in the lead author's monthly user space talks.

Keywords: Artificial Intelligence; algorithmic colonialism; data; response-ability; digital sites

Section 1: Introduction

The growing global interest, combined with rising investments in AI skilling and infrastructure development, is a key driver of the expanding landscape of AI technologies and systems across Africa.



On the notion of missingness for path attribution explainability methods in medical settings: Guiding the selection of medically meaningful baselines

Geiger, Alexander, Wagner, Lars, Rueckert, Daniel, Wilhelm, Dirk, Jell, Alissa

arXiv.org Artificial Intelligence

The explainability of deep learning models remains a significant challenge, particularly in the medical domain where interpretable outputs are critical for clinical trust and transparency. Path attribution methods such as Integrated Gradients rely on a baseline representing the absence of relevant features ("missingness"). Commonly used baselines, such as all-zero inputs, are often semantically meaningless, especially in medical contexts. While alternative baseline choices have been explored, existing methods lack a principled approach to dynamically select baselines tailored to each input. In this work, we examine the notion of missingness in the medical context, analyze its implications for baseline selection, and introduce a counterfactual-guided approach to address the limitations of conventional baselines. We argue that a generated counterfactual (i.e. clinically "normal" variation of the pathological input) represents a more accurate representation of a meaningful absence of features. We use a Variational Autoencoder in our implementation, though our concept is model-agnostic and can be applied with any suitable counterfactual method. We evaluate our concept on three distinct medical data sets and empirically demonstrate that counterfactual baselines yield more faithful and medically relevant attributions, outperforming standard baseline choices as well as other related methods.
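To make the baseline's role concrete, here is a minimal NumPy sketch of Integrated Gradients on a toy quadratic model, comparing the conventional all-zero baseline with a stand-in "counterfactual" baseline. The model, inputs, and baseline values are illustrative assumptions only; the paper itself generates counterfactual baselines with a Variational Autoencoder.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate Integrated Gradients attributions for a scalar output.

    grad_fn(p) must return the gradient of the model output at point p.
    The attribution per feature is (x - baseline) times the average
    gradient along the straight-line path from the baseline to x.
    """
    alphas = np.linspace(0.0, 1.0, steps)
    path = [baseline + a * (x - baseline) for a in alphas]
    avg_grad = np.mean([grad_fn(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

# Toy "model" f(x) = sum(x**2); its gradient is simply 2x.
grad_fn = lambda x: 2.0 * x

x = np.array([1.0, 2.0])
zero_baseline = np.zeros_like(x)       # conventional all-zero baseline
counterfactual = np.array([1.0, 0.5])  # hypothetical "normal" variant of x

attr_zero = integrated_gradients(grad_fn, x, zero_baseline)
attr_cf = integrated_gradients(grad_fn, x, counterfactual)
```

For this quadratic, the attributions are exactly x² against the zero baseline and x² − b² against the counterfactual: the feature that already matches the "normal" variant (first coordinate) receives zero attribution, which is the intuition behind a semantically meaningful notion of missingness.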



Europe Pledges $600 Million for Clean Energy Projects in Africa

WIRED

The EU's Global Gateway plan is challenging China's Belt and Road Initiative for influence in Africa by providing funding to expand access to electricity. Nearly 600 million Africans, half the continent's population, lack electricity, largely because of limited distribution networks; they make up the vast majority of people worldwide without electricity access. But the European Union wants to change this. At the end of September, the president of the European Commission, Ursula von der Leyen, announced a €545 million ($636 million) investment package to support renewable energy and electrification in Africa. New EU-funded projects will include a high-voltage transmission line in Côte d'Ivoire, the electrification of hundreds of rural communities in Cameroon, the development of wind and hydro energy in Lesotho, and the installation of mini-grids in remote areas of Madagascar.


Vacuum Spiker: A Spiking Neural Network-Based Model for Efficient Anomaly Detection in Time Series

Vázquez, Iago Xabier, Sedano, Javier, Afzal, Muhammad, García-Vico, Ángel Miguel

arXiv.org Artificial Intelligence

Anomaly detection is a key task across domains such as industry, healthcare, and cybersecurity. Many real-world anomaly detection problems involve analyzing multiple features over time, making time series analysis a natural approach for such problems. While deep learning models have achieved strong performance in this field, their tendency to exhibit high energy consumption limits their deployment in resource-constrained environments such as IoT devices, edge computing platforms, and wearables. To address this challenge, this paper introduces the Vacuum Spiker algorithm, a novel Spiking Neural Network-based method for anomaly detection in time series. It incorporates a new detection criterion that relies on global changes in neural activity rather than reconstruction or prediction error. It is trained using Spike-Timing-Dependent Plasticity in a novel way, intended to induce changes in neural activity when anomalies occur. A new efficient encoding scheme is also proposed, which discretizes the input space into non-overlapping intervals, assigning each to a single neuron. This strategy encodes information with a single spike per time step, improving energy efficiency compared to conventional encoding methods. Experimental results on publicly available datasets show that the proposed algorithm achieves competitive performance while significantly reducing energy consumption, compared to a wide set of deep learning and machine learning baselines. Furthermore, its practical utility is validated in a real-world case study, where the model successfully identifies power curtailment events in a solar inverter. These results highlight its potential for sustainable and efficient anomaly detection.
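The encoding scheme can be illustrated with a short sketch: the value range is split into non-overlapping intervals, and each time step activates exactly the one neuron whose interval contains the current value. The function name, value range, and neuron count below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def interval_encode(signal, low, high, n_neurons):
    """Encode a time series as one spike per time step.

    The range [low, high] is split into n_neurons non-overlapping
    intervals; at each step, only the neuron whose interval contains
    the current value fires. Output shape: (time_steps, n_neurons).
    """
    edges = np.linspace(low, high, n_neurons + 1)
    # Interval index of each sample, clipped so out-of-range values
    # map to the first or last neuron.
    idx = np.clip(np.searchsorted(edges, signal, side="right") - 1,
                  0, n_neurons - 1)
    spikes = np.zeros((len(signal), n_neurons), dtype=np.int8)
    spikes[np.arange(len(signal)), idx] = 1
    return spikes

spikes = interval_encode([0.05, 0.52, 0.97], low=0.0, high=1.0, n_neurons=4)
```

Because every row contains exactly one spike, the spike count grows linearly with the sequence length regardless of the neuron count, which is the source of the claimed energy advantage over denser encodings such as rate coding.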


A global log for medical AI

Noori, Ayush, Rodman, Adam, Karthikesalingam, Alan, Mateen, Bilal A., Longhurst, Christopher A., Yang, Daniel, deBronkart, Dave, Galea, Gauden, Wolf, Harold F. III, Waxman, Jacob, Mandel, Joshua C., Rotich, Juliana, Mandl, Kenneth D., Mustafa, Maryam, Miles, Melissa, Shah, Nigam H., Lee, Peter, Korom, Robert, Mahoney, Scott, Hain, Seth, Wong, Tien Yin, Mundel, Trevor, Natarajan, Vivek, Dagan, Noa, Clifton, David A., Balicer, Ran D., Kohane, Isaac S., Zitnik, Marinka

arXiv.org Artificial Intelligence

Modern computer systems often rely on syslog, a simple, universal protocol that records every critical event across heterogeneous infrastructure. However, healthcare's rapidly growing clinical AI stack has no equivalent. As hospitals rush to pilot large language models and other AI-based clinical decision support tools, we still lack a standard way to record how, when, by whom, and for whom these AI models are used. Without that transparency and visibility, it is challenging to measure real-world performance and outcomes, detect adverse events, or correct bias or dataset drift. In the spirit of syslog, we introduce MedLog, a protocol for event-level logging of clinical AI. Any time an AI model is invoked to interact with a human, interface with another algorithm, or act independently, a MedLog record is created. This record consists of nine core fields: header, model, user, target, inputs, artifacts, outputs, outcomes, and feedback, providing a structured and consistent record of model activity. To encourage early adoption, especially in low-resource settings, and minimize the data footprint, MedLog supports risk-based sampling, lifecycle-aware retention policies, and write-behind caching; detailed traces for complex, agentic, or multi-stage workflows can also be captured under MedLog. MedLog can catalyze the development of new databases and software to store and analyze MedLog records. Realizing this vision would enable continuous surveillance, auditing, and iterative improvement of medical AI, laying the foundation for a new form of digital epidemiology.
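The nine core fields lend themselves to a simple structured record. The sketch below shows one hypothetical MedLog record serialized as a JSON line, in the spirit of syslog; the field names come from the abstract, but every value (model name, IDs, sub-keys) is an invented placeholder, not part of the protocol specification.

```python
import json
from datetime import datetime, timezone

# One record per model invocation; all values here are illustrative.
record = {
    "header": {"event_id": "evt-0001",
               "timestamp": datetime(2025, 1, 1, tzinfo=timezone.utc).isoformat()},
    "model": {"name": "sepsis-risk-v2", "version": "2.3.1"},  # hypothetical model
    "user": {"role": "physician", "id": "u-42"},              # who invoked it
    "target": {"patient_id": "p-314"},                        # whom the output concerns
    "inputs": {"vitals_window_hours": 24},
    "artifacts": {"prompt_hash": "ab12cd"},                   # intermediate traces
    "outputs": {"risk_score": 0.83},
    "outcomes": {"escalated_to_icu": False},                  # may be filled in later
    "feedback": {"clinician_agreed": True},
}
line = json.dumps(record)  # append-only, one JSON line per event
```

An append-only stream of such lines is what would make the continuous surveillance and auditing described above possible, since records could be sampled, retained, or aggregated per the risk-based policies MedLog supports.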


Artificially Fluent: Swahili AI Performance Benchmarks Between English-Trained and Natively-Trained Datasets

Jaffer, Sophie, Sayer, Simeon

arXiv.org Artificial Intelligence

As large language models (LLMs) expand multilingual capabilities, questions remain about the equity of their performance across languages. While many communities stand to benefit from AI systems, the dominance of English in training data risks disadvantaging non-English speakers. To test the hypothesis that such data disparities may affect model performance, this study compares two monolingual BERT models: one trained and tested entirely on Swahili data, and another on comparable English news data. To simulate how multilingual LLMs process non-English queries through internal translation and abstraction, we translated the Swahili news data into English and evaluated it using the English-trained model. This approach tests the hypothesis by evaluating whether translating Swahili inputs for evaluation on an English model yields better or worse performance than training and testing a model entirely in Swahili, thus isolating the effect of language consistency versus cross-lingual abstraction. The results show that, despite high-quality translation, the native Swahili-trained model performed better than the Swahili-to-English translated pipeline, producing nearly four times fewer errors: an error rate of 0.36% versus 1.47%. This gap suggests that translation alone does not bridge representational differences between languages and that models trained in one language may struggle to accurately interpret translated inputs due to imperfect internal knowledge representation, indicating that native-language training remains important for reliable outcomes. In educational and informational contexts, even small performance gaps may compound inequality. Future research should focus on broader dataset development for underrepresented languages and renewed attention to multilingual model evaluation, to ensure that global AI deployment does not reinforce existing digital divides.


Localizing Persona Representations in LLMs

Cintas, Celia, Rateike, Miriam, Miehling, Erik, Daly, Elizabeth, Speakman, Skyler

arXiv.org Artificial Intelligence

We present a study on how and where personas -- defined by distinct sets of human characteristics, values, and beliefs -- are encoded in the representation space of large language models (LLMs). Using a range of dimension reduction and pattern recognition methods, we first identify the model layers that show the greatest divergence in encoding these representations. We then analyze the activations within a selected layer to examine how specific personas are encoded relative to others, including their shared and distinct embedding spaces. We find that, across multiple pre-trained decoder-only LLMs, the analyzed personas show large differences in representation space only within the final third of the decoder layers. We observe overlapping activations for specific ethical perspectives -- such as moral nihilism and utilitarianism -- suggesting a degree of polysemy. In contrast, political ideologies like conservatism and liberalism appear to be represented in more distinct regions. These findings help to improve our understanding of how LLMs internally represent information and can inform future efforts in refining the modulation of specific human traits in LLM outputs. Warning: This paper includes potentially offensive sample statements.
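As a rough illustration of the layer-wise analysis described above, the sketch below computes a simple per-layer divergence score (mean pairwise distance between persona centroids) on synthetic activations. The array shapes, the 12-layer model, and the injected late-layer offsets are all assumptions for demonstration; the paper works with real hidden states from pre-trained decoder-only LLMs and richer dimension-reduction methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activations: (layers, personas, samples, hidden_dim).
n_layers, n_personas, n_samples, dim = 12, 3, 20, 8
acts = rng.normal(size=(n_layers, n_personas, n_samples, dim))

# Simulate the reported effect: personas drift apart only in the
# final third of the layers.
for layer in range(8, 12):
    for p in range(n_personas):
        acts[layer, p] += 5.0 * p

def persona_divergence(layer_acts):
    """Mean pairwise distance between persona centroids at one layer."""
    centroids = layer_acts.mean(axis=1)  # (personas, hidden_dim)
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(centroids)
             for b in centroids[i + 1:]]
    return float(np.mean(dists))

divergence = [persona_divergence(acts[l]) for l in range(n_layers)]
```

Plotting such a score against layer index is one simple way to locate where persona representations separate; in this synthetic setup the score jumps at layer 8, mirroring the paper's finding that divergence concentrates in the final third of the decoder layers.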